Crossmodal binding of fear in voice and face.
Authors
Abstract
In social environments, multiple sensory channels are engaged simultaneously in the service of communication. In this experiment, we sought to define the neuronal mechanisms underlying a perceptual bias in processing simultaneously presented emotional voices and faces. Specifically, we were interested in how bimodal presentation of a fearful voice facilitates recognition of a fearful facial expression. Using event-related functional MRI in a design that crossed sensory modality (visual or auditory) with emotional expression (fearful or happy), we show that perceptual facilitation during face fear processing is expressed through modulation of neuronal responses in the amygdala and the fusiform cortex. These data suggest that the amygdala is important for emotional crossmodal sensory convergence, and that the associated perceptual bias during fear processing is mediated by task-related modulation of face-processing regions of fusiform cortex.
Similar articles
Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as...
The neural network sustaining the crossmodal processing of human gender from faces and voices: An fMRI study
The aim of this fMRI study was to investigate the cerebral crossmodal interactions between human faces and voices during a gender categorization task. Twelve healthy male participants took part in the study. They were scanned in 4 runs that contained 3 conditions, consisting of the presentation of faces, voices, or congruent face-voice pairs. The task consisted of categorizing each trial (visual,...
Adaptation Aftereffects in Vocal Emotion Perception Elicited by Expressive Faces and Voices
The perception of emotions is often suggested to be multimodal in nature, and bimodal, as compared to unimodal (auditory or visual), presentation of emotional stimuli can lead to superior emotion recognition. In previous studies, contrastive aftereffects in emotion perception caused by perceptual adaptation have been shown for faces and for auditory affective vocalization, when adaptors were of t...
MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus.
An influential neural model of face perception suggests that the posterior superior temporal sulcus (STS) is sensitive to those aspects of faces that produce transient visual changes, including facial expression. Other researchers note that recognition of expression involves multiple sensory modalities and suggest that the STS also may respond to crossmodal facial signals that change transientl...
Compensating for age limits through emotional crossmodal integration
Social interactions in daily life necessitate the integration of social signals from different sensory modalities. In the aging literature, it is well established that the recognition of emotion in facial expressions declines with advancing age, and this also occurs with vocal expressions. By contrast, crossmodal integration processing in healthy aging individuals is less well documented. Here, we i...
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume 98, Issue 17
Pages: -
Published: 2001